focus group
AI Literacy for Community Colleges: Instructors' Perspectives on Scenario-Based and Interactive Approaches to Teaching AI
Warrier, Aparna Maya, Agarwal, Arav, Savelka, Jaromir, Bogart, Christopher A, Burte, Heather
This research category full paper investigates how community college instructors evaluate interactive, no-code AI literacy resources designed for non-STEM learners. As artificial intelligence becomes increasingly integrated into everyday technologies, AI literacy - the ability to evaluate AI systems, communicate with them, and understand their broader impacts - has emerged as a critical skill across disciplines. Yet effective, scalable approaches for teaching these concepts in higher education remain limited, particularly for students outside STEM fields. To address this gap, we developed AI User, an interactive online curriculum that introduces core AI concepts through scenario-based activities set in real-world contexts. This study presents findings from four focus groups with instructors who engaged with AI User materials and participated in structured feedback activities. Thematic analysis revealed that instructors valued exploratory tasks that simulated real-world AI use cases and fostered experimentation, while also identifying challenges related to scaffolding, accessibility, and multi-modal support. A ranking task for instructional support materials showed a strong preference for interactive demonstrations over traditional educational materials like conceptual guides or lecture slides. These findings offer insights into instructor perspectives on making AI concepts more accessible and relevant for broad learner audiences. They also inform the design of AI literacy tools that align with diverse teaching contexts and support critical engagement with AI in higher education.
- North America > United States > Pennsylvania > Allegheny County > Pittsburgh (0.05)
- North America > United States > Virginia (0.04)
- Research Report (1.00)
- Instructional Material > Course Syllabus & Notes (1.00)
From keywords to semantics: Perceptions of large language models in data discovery
Halstead, Maura E, Green, Mark A., Jay, Caroline, Kingston, Richard, Topping, David, Singleton, Alexander
Data discovery currently relies on keyword matching, which requires researchers to know the exact wording that other researchers previously used - a challenging process that can lead to missing relevant data. Large Language Models (LLMs) could enhance data discovery by removing this requirement and allowing researchers to ask questions in natural language. However, we do not currently know whether researchers would accept LLMs for data discovery. Using a human-centered artificial intelligence (HCAI) focus, we ran focus groups (N = 27) to understand researchers' perspectives on LLMs for data discovery. Our conceptual model shows that the potential benefits are not enough for researchers to use LLMs instead of current technology. Barriers prevent researchers from fully accepting LLMs, but features around transparency could overcome them. Using our model will allow developers to incorporate features that increase the acceptance of LLMs for data discovery.
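The gap the abstract describes can be made concrete with a small sketch: an exact-keyword lookup fails when a dataset uses a synonym, while a similarity score over token embeddings can still surface it. Everything here is invented for illustration (the toy vocabulary, hand-assigned two-dimensional "embeddings", and dataset title); it is not the paper's system.

```python
# Conceptual sketch: why exact keyword matching misses relevant datasets,
# and how a semantic similarity score can recover them.
import math

# Toy embeddings: hand-assigned vectors so near-synonyms land close together.
EMB = {
    "income":   (1.0, 0.1),
    "earnings": (0.9, 0.2),   # near-synonym of "income"
    "weather":  (0.0, 1.0),
}

def keyword_match(query: str, title: str) -> bool:
    """Exact-wording match: only fires when the same token appears."""
    return any(tok in title.split() for tok in query.split())

def semantic_score(query: str, title: str) -> float:
    """Max cosine similarity between known query tokens and title tokens."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.hypot(*a) * math.hypot(*b))
    scores = [cos(EMB[q], EMB[t])
              for q in query.split() if q in EMB
              for t in title.split() if t in EMB]
    return max(scores, default=0.0)

title = "household earnings survey"
print(keyword_match("income", title))         # False: wording differs
print(semantic_score("income", title) > 0.9)  # True: meaning matches
```

A natural-language interface backed by embeddings (or an LLM) effectively replaces the brittle `keyword_match` path with something closer to `semantic_score`, which is the shift from keywords to semantics the title refers to.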
- North America > United States > New York > New York County > New York City (0.04)
- Europe > United Kingdom > England > Merseyside > Liverpool (0.04)
- Europe > United Kingdom > England > Greater Manchester > Manchester (0.04)
- Asia > Myanmar > Tanintharyi Region > Dawei (0.04)
- Research Report > Experimental Study (0.95)
- Research Report > New Finding (0.70)
- Health & Medicine (1.00)
- Government (1.00)
- Information Technology (0.68)
- Education > Educational Setting (0.46)
Kindergarten is important, but illness, tears make chronic absenteeism a challenge
Students arrive for the first day of school at 24th Street Elementary School. Kindergartners have California's highest chronic absenteeism rates, with 26% missing at least 10% of school days in 2023-24.
- North America > United States > California > Los Angeles County > Los Angeles (0.07)
- Pacific Ocean > North Pacific Ocean > San Francisco Bay (0.04)
- North America > United States > California > San Francisco County > San Francisco (0.04)
- (4 more...)
Advancing Data Equity: Practitioner Responsibility and Accountability in NLP Data Practices
Cunningham, Jay L., Shao, Kevin Zhongyang, Pang, Rock Yuren, Mengist, Nathaniel
While research has focused on surfacing and auditing algorithmic bias to ensure equitable AI development, less is known about how NLP practitioners - those directly involved in dataset development, annotation, and deployment - perceive and navigate issues of NLP data equity. This study is among the first to center practitioners' perspectives, linking their experiences to a multi-scalar AI governance framework and advancing participatory recommendations that bridge technical, policy, and community domains. Drawing on a 2024 questionnaire and focus group, we examine how U.S.-based NLP data practitioners conceptualize fairness, contend with organizational and systemic constraints, and engage emerging governance efforts such as the U.S. AI Bill of Rights. Findings reveal persistent tensions between commercial objectives and equity commitments, alongside calls for more participatory and accountable data workflows. We critically engage debates on data diversity and diversity washing, arguing that improving NLP equity requires structural governance reforms that support practitioner agency and community consent.
- Europe > Austria > Vienna (0.14)
- Oceania > New Zealand (0.04)
- North America > United States > Washington > King County > Seattle (0.04)
- (5 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Questionnaire & Opinion Survey (1.00)
- Law (1.00)
- Health & Medicine (1.00)
- Government (1.00)
- (2 more...)
BrisT1D Dataset: Young Adults with Type 1 Diabetes in the UK using Smartwatches
James, Sam Gordon, Armstrong, Miranda Elaine Glynis, O'Kane, Aisling Ann, Emerson, Harry, Abdallah, Zahraa S.
Background: Type 1 diabetes (T1D) has seen a rapid evolution in management technology and forms a useful case study for the future management of other chronic conditions. Further development of this management technology requires an exploration of its real-world use and the potential of additional data streams. To facilitate this, we contribute the BrisT1D Dataset to the growing number of public T1D management datasets. The dataset was developed from a longitudinal study of 24 young adults in the UK who used a smartwatch alongside their usual T1D management. Findings: The BrisT1D dataset features device data from the T1D management systems and smartwatches used by participants, as well as transcripts of monthly interviews and focus groups conducted during the study. The device data is provided in a processed state, for usability and more rapid analysis, and in a raw state, for in-depth exploration of novel insights captured in the study. Conclusions: This dataset has a range of potential applications. The quantitative elements can support blood glucose prediction, hypoglycaemia prediction, and closed-loop algorithm development. The qualitative elements enable the exploration of user experiences and opinions, as well as broader mixed-methods research into the role of smartwatches in T1D management.
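To show the shape of the blood glucose prediction task such a dataset could support, here is a minimal baseline on a synthetic continuous-glucose-monitor trace. The series, the 5-minute sampling interval, and the 30-minute horizon are all assumptions of the sketch, not details of the BrisT1D data or the study's models; persistence (predicting no change) is simply the standard naive baseline any learned predictor must beat.

```python
# Illustrative baseline only: persistence forecasting on a synthetic CGM trace.
import math
import random

random.seed(0)

# Synthetic glucose readings every 5 minutes (mmol/L): slow sine drift + noise.
glucose = [7.0 + 2.0 * math.sin(i / 20) + random.gauss(0, 0.15)
           for i in range(288)]  # one simulated day

HORIZON = 6  # 6 steps x 5 min = 30 minutes ahead

def persistence_forecast(series, horizon):
    """Naive baseline: predict that glucose stays at its last observed value."""
    return [series[i] for i in range(len(series) - horizon)]

preds = persistence_forecast(glucose, HORIZON)
actual = glucose[HORIZON:]
rmse = math.sqrt(sum((p - a) ** 2 for p, a in zip(preds, actual)) / len(preds))
print(f"persistence RMSE over 30 min: {rmse:.2f} mmol/L")
```

Hypoglycaemia prediction fits the same frame as a classification variant (will glucose fall below a threshold within the horizon?), and closed-loop work adds insulin dosing as a control signal on top of such forecasts.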
- North America > United States > New York > New York County > New York City (0.04)
- Europe > United Kingdom > England > Bristol (0.04)
- Europe > Switzerland > Basel-City > Basel (0.04)
- Europe > Portugal > Lisbon > Lisbon (0.04)
- Research Report > Experimental Study (1.00)
- Research Report > Strength High (0.93)
- Health & Medicine > Therapeutic Area > Endocrinology > Diabetes (1.00)
- Health & Medicine > Health Care Technology (1.00)
- Health & Medicine > Consumer Health (0.97)
- Information Technology > Artificial Intelligence > Machine Learning (1.00)
- Information Technology > Hardware (0.98)
- Information Technology > Communications (0.68)
- Information Technology > Data Science > Data Quality (0.67)
Security Benefits and Side Effects of Labeling AI-Generated Images
Höltervennhoff, Sandra, Ricker, Jonas, Raphael, Maike M., Schwedes, Charlotte, Weil, Rebecca, Fischer, Asja, Holz, Thorsten, Schönherr, Lea, Fahl, Sascha
Generative artificial intelligence is developing rapidly, impacting humans' interaction with information and digital media. It is increasingly used to create deceptively realistic misinformation, so lawmakers have imposed regulations requiring the disclosure of AI-generated content. However, little is known about whether these labels reduce the risks of AI-generated misinformation. Our work addresses this research gap. Focusing on AI-generated images, we study the implications of labels, including the possibility of mislabeling. Assuming that simplicity, transparency, and trust are likely to impact the successful adoption of such labels, we first qualitatively explore users' opinions and expectations of AI labeling using five focus groups. Second, we conduct a pre-registered online survey with over 1300 U.S. and EU participants to quantitatively assess the effect of AI labels on users' ability to recognize misinformation containing either human-made or AI-generated images. Our focus groups illustrate that, while participants have concerns about the practical implementation of labeling, they consider it helpful in identifying AI-generated images and avoiding deception. However, considering security benefits, our survey revealed an ambiguous picture, suggesting that users might over-rely on labels. While inaccurate claims supported by labeled AI-generated images were rated less credible than those with unlabeled AI-generated images, the belief in accurate claims also decreased when accompanied by a labeled AI-generated image. Moreover, we find an undesired side effect: human-made images conveying inaccurate claims were perceived as more credible in the presence of labels.
- Europe > Germany > Saarland > Saarbrücken (0.40)
- North America > Mexico (0.28)
- Asia > Middle East > Syria (0.28)
- (21 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Questionnaire & Opinion Survey (1.00)
- Personal > Interview (0.67)
- Media > News (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
- Government > Regional Government > Asia Government (0.67)
Museums have tons of data, and AI could make it more accessible, but standardizing and organizing it across fields won't be easy
Ice cores in freezers, dinosaurs on display, fish in jars, birds in boxes, human remains and ancient artifacts from long gone civilizations that few people ever see - museum collections are filled with all this and more. These collections are treasure troves that recount the planet's natural and human history, and they help scientists in a variety of fields, such as geology, paleontology and anthropology. What you see on a trip to a museum is only a sliver of the wonders held in its collections. Museums generally want to make the contents of their collections available for teachers and researchers, either physically or digitally. However, each collection's staff has its own way of organizing data, so navigating these collections can prove challenging.
- Africa > Middle East > Egypt > Nile Delta (0.40)
- North America > United States > Tennessee (0.06)
Stakeholder Perspectives on Whether and How Social Robots Can Support Mediation and Advocacy for Higher Education Students with Disabilities
Markelius, Alva, Bailey, Julie, Gibson, Jenny L., Gunes, Hatice
Existing power dynamics, social injustices and structural barriers may exacerbate challenges related to support and advocacy, limiting some students' ability to articulate their needs effectively [59]. This disparity highlights an increasing need for alternative approaches to student advocacy that may empower students with disabilities in ways that current practices may not. While human disability support practitioners can play a crucial role in bridging gaps between students and institutions, these efforts are resource-intensive, relying on trained personnel, availability, and sustained institutional commitment. This study explores the feasibility and ethical implications of employing artificial intelligence (AI), and in particular social robots, as tools for mediation and advocacy for disabled students in higher education. While the overarching focus is on social robots and LLMs, the study adopts a broader perspective on the use of technology and AI in general for disabled students, to draw insights and identify patterns that can inform the design, implementation, and ethical considerations of AI-driven assistive technologies.
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.47)
- Europe > Sweden (0.28)
- Europe > Spain (0.14)
- (3 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Personal > Interview (1.00)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Education > Focused Education > Special Education (1.00)
- (3 more...)
Perceived Fairness of the Machine Learning Development Process: Concept Scale Development
Mishra, Anoop, Khazanchi, Deepak
In machine learning (ML) applications, unfairness arises from bias in the data, the data curation process, erroneous assumptions, and implicit bias rendered during the development process. It is also well-accepted by researchers that fairness in ML application development is highly subjective, with a lack of clarity about what it means from an ML development and implementation perspective. Thus, in this research, we investigate and formalize the notion of the perceived fairness of ML development from a sociotechnical lens. Our goal in this research is to understand the characteristics of perceived fairness in ML applications. We address this research goal using a three-pronged strategy: 1) conducting virtual focus groups with ML developers, 2) reviewing existing literature on fairness in ML, and 3) incorporating aspects of justice theory relating to procedural and distributive justice. Based on our theoretical exposition, we propose operational attributes of perceived fairness to be transparency, accountability, and representativeness. These are described in terms of multiple concepts that comprise each dimension of perceived fairness. We use this operationalization to empirically validate the notion of perceived fairness of machine learning (ML) applications from both the ML practitioners' and users' perspectives. The multidimensional framework for perceived fairness offers a comprehensive understanding of perceived fairness, which can guide the creation of fair ML systems with positive implications for society and businesses.
- North America > United States > Nebraska > Douglas County > Omaha (0.29)
- North America > United States > Massachusetts > Suffolk County > Boston (0.04)
Plurals: A System for Guiding LLMs Via Simulated Social Ensembles
Ashkinaze, Joshua, Fry, Emily, Edara, Narendra, Gilbert, Eric, Budak, Ceren
Recent debates raised concerns that language models may favor certain viewpoints. But what if the solution is not to aim for a 'view from nowhere' but rather to leverage different viewpoints? We introduce Plurals, a system and Python library for pluralistic AI deliberation. Plurals consists of Agents (LLMs, optionally with personas) which deliberate within customizable Structures, with Moderators overseeing deliberation. Plurals is a generator of simulated social ensembles. Plurals integrates with government datasets to create nationally representative personas, includes deliberation templates inspired by deliberative democracy, and allows users to customize both information-sharing structures and deliberation behavior within Structures. Six case studies demonstrate fidelity to theoretical constructs and efficacy. Three randomized experiments show simulated focus groups produced output resonant with an online sample of the relevant audiences (chosen over zero-shot generation in 75% of trials). Plurals is both a paradigm and a concrete system for pluralistic AI. The Plurals library is available at https://github.com/josh-ashkinaze/plurals and will be continually updated.
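The architecture the abstract describes (persona-bearing agents deliberating within a structure, with a moderator overseeing the exchange) can be sketched in a few lines. This is not the Plurals API: the class names, the `Chain` structure, and the stubbed responses standing in for LLM completions are all invented here to illustrate the pattern; the actual library is at the GitHub URL above.

```python
# Conceptual sketch of the deliberation pattern: agents within a structure,
# overseen by a moderator. Class names and stubbed "LLM" calls are invented.

class Agent:
    def __init__(self, persona, respond):
        self.persona = persona
        self.respond = respond  # stands in for an LLM call with this persona

class Chain:
    """Sequential structure: each agent sees the transcript so far."""
    def __init__(self, agents):
        self.agents = agents

    def deliberate(self, task):
        transcript = []
        for agent in self.agents:
            transcript.append((agent.persona, agent.respond(task, transcript)))
        return transcript

def moderator_summary(transcript):
    """Moderator oversight: aggregate every persona's contribution."""
    return " | ".join(f"{p}: {msg}" for p, msg in transcript)

# Stub responses in place of real LLM completions.
agents = [
    Agent("rural voter", lambda task, t: f"On '{task}', cost matters most."),
    Agent("urban renter", lambda task, t: f"Seen {len(t)} prior view(s); I would add access."),
]
transcript = Chain(agents).deliberate("transit policy")
print(moderator_summary(transcript))
```

Swapping the stubbed `respond` callables for real model calls, and the `Chain` for other information-sharing structures, gives the customization axis the abstract describes; nationally representative personas would be drawn from government datasets rather than hard-coded.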
- Europe > Austria > Vienna (0.14)
- North America > United States > New York > New York County > New York City (0.04)
- North America > United States > Michigan (0.04)
- (11 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Law (1.00)
- Energy (1.00)
- Education (1.00)
- (2 more...)